Human-Agent Trust Weekly AI News
October 13 - October 21, 2025

This weekly update shows that trust is the missing piece companies need to make AI agents work. Even though AI technology is becoming very powerful, many organizations are finding that people will not use these tools unless they trust them first.
A military research study reported important findings about building trust with AI agents. Scientists tested whether giving explanations would help people trust AI decisions more. They found three key results. First, explanations helped users understand how the AI reasons. Second, people became better at knowing when to rely on the AI and when not to. Third, overall trust in the system went up. The researchers noted that this matters because, historically, people refuse to use automation they do not trust. The study also found that users preferred having both pictures and words to explain what the AI was doing.
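The pattern the study points at, pairing each recommendation with a confidence score, a plain-language rationale, and data that could drive a visual explanation, can be sketched in a few lines. All names here (ExplainedRecommendation, should_defer_to_human, the 0.8 threshold, the sample values) are illustrative assumptions, not details from the study:

```python
from dataclasses import dataclass, field

@dataclass
class ExplainedRecommendation:
    """An AI recommendation bundled with the evidence behind it."""
    action: str                # what the agent suggests
    confidence: float          # the model's own certainty, 0.0-1.0
    rationale: str             # plain-language explanation (the "words")
    feature_weights: dict = field(default_factory=dict)  # chart data (the "pictures")

    def should_defer_to_human(self, threshold: float = 0.8) -> bool:
        # Low confidence signals the user should not rely on the AI here,
        # which supports knowing when to use the AI and when not to.
        return self.confidence < threshold

rec = ExplainedRecommendation(
    action="approve claim",
    confidence=0.65,
    rationale="Claim amount and history match most previously approved claims.",
    feature_weights={"claim_amount": 0.4, "claim_history": 0.35, "policy_age": 0.25},
)
print(rec.should_defer_to_human())  # True: confidence is below the threshold
```

The point of the structure is that the explanation travels with the decision, so a user can check the rationale instead of trusting the action blindly.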
Companies are also learning that trust requires more than good technology. A consulting report found that the businesses getting real results from AI built strong trust inside their organizations first. This means trust is not just about the AI being smart; it is about the whole company believing in it and understanding how to use it properly.
However, there is a new security worry that could hurt trust in AI agents. These agents need special digital identities to access different systems and do their work. Security experts warn that these non-human identities often exist in blind spots where companies cannot see them well. Bad actors could steal these identities and cause serious harm. Gartner, a major research company, predicts that 33% of business applications will include AI agents by 2028, up from less than 1% in 2024. This rapid growth means companies must solve the security problems fast or risk losing trust.
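One way to surface those blind spots is a periodic audit over an inventory of agent identities, flagging any with no accountable owner or with stale credentials. The inventory format, field names, and 90-day rotation rule below are hypothetical assumptions for illustration; a real system would pull this data from an IAM provider or secrets manager:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical inventory of non-human (agent) identities.
agent_identities = [
    {"name": "claims-agent", "owner": "ops-team", "last_rotated": "2025-09-30"},
    {"name": "marketing-agent", "owner": None, "last_rotated": "2025-01-15"},
]

MAX_KEY_AGE = timedelta(days=90)  # assumed rotation policy

def audit(identities, now=None):
    """Flag identities sitting in blind spots: no owner or stale credentials."""
    now = now or datetime.now(timezone.utc)
    findings = []
    for ident in identities:
        if ident["owner"] is None:
            findings.append((ident["name"], "no accountable owner"))
        rotated = datetime.fromisoformat(ident["last_rotated"]).replace(tzinfo=timezone.utc)
        if now - rotated > MAX_KEY_AGE:
            findings.append((ident["name"], "credentials not rotated in 90 days"))
    return findings

for name, issue in audit(agent_identities):
    print(f"{name}: {issue}")
```

An identity with no owner is exactly the kind of credential a bad actor can steal without anyone noticing, so the audit makes ownership and rotation visible before that happens.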
In the insurance business, a big debate is happening about trust and the future of human agents. Some insurance leaders strongly believe that human relationships will always be necessary. They say customers want a trusted person they can ask questions and rely on for important decisions. These leaders think AI should help human agents work better, not replace them completely.
But other experts see things differently. SuperAgent AI, a company in San Francisco, believes AI agents will eventually replace human insurance agents. The company's founder admits this is not a popular opinion in the industry. He thinks many agents are being selfish by wanting to keep doing business the old way instead of adapting to what customers actually want. He points out that people already use AI assistants in many parts of their lives, so they will get comfortable using them for insurance too.
The insurance industry faces specific challenges with AI agent trust. For AI agents to sell insurance on their own, they would need to be licensed in all 50 states. Right now, regulations require humans to hold these licenses. SuperAgent AI is talking with major regulators about changing these rules. The company also acknowledges concerns about mistakes and liability. Their solution is to have AI agents work alongside humans for now, with humans reviewing the AI's work before it goes to customers.
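The "humans review the AI's work before it goes to customers" arrangement can be sketched as a review queue, where nothing the AI drafts is sent until a licensed person signs off. The class and field names below are hypothetical, not SuperAgent AI's actual system:

```python
from dataclasses import dataclass

@dataclass
class Draft:
    """AI-generated output awaiting human sign-off."""
    customer: str
    body: str
    approved: bool = False

class ReviewQueue:
    """AI drafts wait here until a human reviewer releases them."""
    def __init__(self):
        self.pending = []
        self.sent = []

    def submit(self, draft: Draft):
        # Nothing reaches the customer directly from the AI.
        self.pending.append(draft)

    def approve_and_send(self, reviewer: str, index: int = 0) -> Draft:
        draft = self.pending.pop(index)
        draft.approved = True
        self.sent.append((reviewer, draft))
        return draft

queue = ReviewQueue()
queue.submit(Draft(customer="A. Smith", body="Quoted premium: $120/month"))
sent = queue.approve_and_send(reviewer="agent-jones")
print(sent.approved)  # True: only reviewed drafts go out
```

The design choice is that approval is the only path out of the queue, which keeps a licensed human accountable for every message while the liability rules are still unsettled.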
Meanwhile, Salesforce, one of the world's biggest software companies, announced its vision for what it calls the Agentic Enterprise. This is a new way of working where AI agents help employees instead of replacing them. The company's leader, Marc Benioff, said AI should elevate human potential. Salesforce has already tested this approach with thousands of customer deployments. Their platform connects humans, AI agents, and data together in one system built on trust.
Real businesses are already seeing results from AI agents when trust is established. In customer service, some companies cut claim handling time by 40% using AI agents. In sales and marketing, one company increased lead conversions by 25% after implementing AI campaign tools. These successes show what becomes possible when organizations solve the trust problem.
Looking ahead, experts believe the next five years will focus on human factors rather than just making AI smarter. The technology is advancing quickly, but the real challenge is helping people understand how to use AI agents effectively and knowing when to trust their recommendations. This means companies need to invest in training, clear explanations, and security measures that protect both the technology and the people who rely on it.